Predictive performance of
Covid-19 forecasts


Sebastian Funk (@sbfnk)
https://epiforecasts.io

6 October, 2022
JUNIPER seminar

Acknowledgements

EpiForecasts group (https://epiforecasts.io)
Akira Endo, Hannah Choi, James Munday,
Kath Sherratt, Nikos Bosse, Sam Abbott, Sophie Meakin

Johannes Bracher, Nick Reich and other collaborators

Mechanistic models support causal understanding, but predictions can have value in their own right

Desai et al., Health Secur, 2019
Keeling et al., Stat Meth Med Res, 2021
Cramer et al., Scientific Data, 2021

Short-term forecasts can inform decision making

  • Anticipate healthcare demand
  • Inform interventions
  • Support clinical trials

Being able to predict does not mean we can explain. For that, we need a mechanistic model.

Metcalf & Lessler, Science, 2017

What we did

Forecasting via the renewal equation

\begin{align}
\textrm{New infections}~I_t & = R_t \sum_{\tau} g_{\tau} I_{t-\tau}\\
\textrm{Reproduction number}~R_t & = R_{t-1} \exp(\textrm{GP})\\
\textrm{Delayed reporting}~D_t & = \sum_\tau \xi_\tau I_{t-\tau} \\
\textrm{Observations}~C_t & \sim \mathrm{NegBin}(D_t \omega_{(t~\textrm{mod}~7)}, \phi)
\end{align}

https://epiforecasts.io/EpiNow2/
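The generative core of the model above can be illustrated with a minimal sketch. This is not the EpiNow2 implementation (which fits the model in Stan, with a Gaussian process on $R_t$, reporting delays, and day-of-week effects); it only shows the deterministic renewal step $I_t = R_t \sum_\tau g_\tau I_{t-\tau}$, with a toy generation-interval distribution `g` and a hypothetical function name:

```python
import numpy as np

# Toy discretised generation interval g_tau (must sum to 1); illustrative values only
g = np.array([0.2, 0.5, 0.3])

def renewal_forecast(I_init, R, horizon, g=g):
    """Project infections forward via the renewal equation
    I_t = R_t * sum_tau g_tau * I_{t-tau}, here with constant R."""
    I = list(I_init)
    for _ in range(horizon):
        # Most recent infections first: I_{t-1}, I_{t-2}, ...
        past = np.array(I[-len(g):][::-1])
        I.append(R * np.dot(g[:len(past)], past))
    return np.array(I[len(I_init):])

# With R = 1 and a flat history, incidence stays level at 100
proj = renewal_forecast([100.0, 100.0, 100.0], R=1.0, horizon=5)
```

With `R > 1` the same recursion produces exponential growth, which is why short-horizon forecasts hinge almost entirely on the current estimate of $R_t$.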

Global COVID case forecasts via the renewal equation

Abbott et al., Wellcome Open Res, 2020
Gostic et al., PLoS Comp Biol, 2021
https://epiforecasts.io/posts/2022-03-25-rt-reflections

Forecasting to inform policy in the UK

Funk et al., medRxiv, 2021
Medley, Adv Biol Regul, 2022

COVID forecast hubs

Reich et al., Am J Public Health, 2022
https://covid19forecasthub.org
https://covid19forecasthub.eu

  1. How good were COVID forecasts?
  2. How could COVID forecasts be improved?
  3. What can we conclude for the next pandemic?

How good were COVID forecasts?

We can compare forecasts using
proper scoring rules, e.g. \[\mathrm{CRPS}(F, x) = \mathbb{E}|X-x| - \frac{1}{2}\mathbb{E}|X-X'|\]

Gneiting and Raftery, J R Statist Soc B, 2007
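For sample-based forecasts, the CRPS formula above has a direct empirical estimator: replace the expectations with averages over predictive samples. A minimal sketch (the function name is illustrative; in practice packages such as scoringutils provide this):

```python
import numpy as np

def crps_sample(samples, x):
    """Empirical CRPS from predictive samples:
    E|X - x| - 0.5 * E|X - X'| (Gneiting & Raftery, 2007).
    Lower is better; a perfect point forecast scores 0."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - x))
    # Mean absolute difference over all pairs of samples
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

crps_sample([3.0, 3.0, 3.0], 3.0)  # → 0.0
```

The second term rewards sharpness: a forecast concentrated on the right value scores better than a wide one, so the CRPS trades off calibration against sharpness in a single number.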

Forecast hubs:
median ensemble outperforms individual models

Sherratt et al., medRxiv, 2022
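The hub ensembles combine quantile forecasts, and the median ensemble does so by taking the median of each predictive quantile across models. A minimal sketch of that combination step (function name illustrative):

```python
import numpy as np

def median_ensemble(forecasts):
    """Quantile-wise median ensemble: for each predictive quantile,
    take the median of the member models' values.
    `forecasts` has shape (n_models, n_quantiles)."""
    return np.median(np.asarray(forecasts, dtype=float), axis=0)

# Three models' 5%/50%/95% quantiles for the same target
fc = [[80, 100, 130],
      [90, 110, 150],
      [70,  95, 120]]
median_ensemble(fc)  # → [80., 100., 130.]
```

Taking medians rather than means makes the ensemble robust to a single badly miscalibrated member, one plausible reason it outperforms individual models.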

Proper scoring rules such as the CRPS measure the relative quality of forecasts; on their own they do not tell us whether a forecast is good in absolute terms.

Absolute quality of forecasts #1: baseline models

Absolute quality of forecasts #2: human vs. machine

Bosse et al., PLOS Comp Biol, 2022

Absolute quality of forecasts #3: calibration

Sherratt et al., medRxiv, 2022
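One standard calibration check for interval forecasts is empirical coverage: a calibrated 90% prediction interval should contain roughly 90% of observed outcomes. A minimal sketch (function name illustrative):

```python
import numpy as np

def empirical_coverage(lower, upper, observed):
    """Fraction of observations falling inside the prediction interval.
    For a calibrated 90% interval this should be close to 0.9."""
    lower, upper, observed = (np.asarray(a, dtype=float)
                              for a in (lower, upper, observed))
    return np.mean((observed >= lower) & (observed <= upper))

# Two targets: the first outcome is covered, the second is not
empirical_coverage([90, 80], [130, 150], [120, 160])  # → 0.5
```

Coverage well below the nominal level signals overconfident forecasts; coverage well above it signals intervals that are wider than they need to be.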

How could COVID forecasts be improved?

Ensembles: learn from past performance

Sherratt et al., medRxiv, 2022
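One simple way to let an ensemble learn from past performance is to weight members by their historical scores, e.g. inversely to their mean past CRPS. This is a hedged illustration of the idea, not the weighting scheme used by any particular hub; function names and the weighted-mean combination are assumptions:

```python
import numpy as np

def inverse_score_weights(past_scores):
    """Weight each model inversely to its mean past score
    (lower CRPS = better), normalised to sum to 1.
    `past_scores` has shape (n_models, n_past_forecasts)."""
    mean_scores = np.mean(np.asarray(past_scores, dtype=float), axis=1)
    w = 1.0 / mean_scores
    return w / w.sum()

def weighted_quantile_ensemble(forecasts, weights):
    """Weighted mean of each predictive quantile across models."""
    return np.average(np.asarray(forecasts, dtype=float),
                      axis=0, weights=weights)

# Model 0 scored twice as well (half the mean CRPS) in the past
w = inverse_score_weights([[1.0, 1.0], [2.0, 2.0]])  # → approx. [0.667, 0.333]
```

In practice the gain over the unweighted median has often been modest, since past performance is a noisy guide when epidemic dynamics shift.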

Including possible predictors #1: leading surveillance

Meakin et al., BMC Medicine, 2021

Including possible predictors #2: observed behaviour

Munday et al., in prep

Including possible predictors #3: variants

https://github.com/epiforecasts/forecast.vocs
https://github.com/jbracher/branching_process_delta

What can we conclude for the next pandemic?

Summary

  • Good Covid-19 forecasts have proved elusive beyond one or two generation intervals ahead
  • Ensembles perform best, but can be difficult to interpret

Open questions:

  • Can predictive performance be improved?
  • Are we measuring predictive performance in the right way?

Forecasting and nowcasting will remain relevant

UKHSA, 2022

Nowcasting of delayed reports

https://covid19nowcasthub.de
https://epiforecasts.io/epinowcast/
https://github.com/epiforecasts/nowcasting.example
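The core idea behind nowcasting delayed reports can be shown with the simplest possible estimator: scale up the counts reported so far by the proportion of reports expected to have arrived at each delay. Real nowcasting models (e.g. epinowcast) estimate the delay distribution jointly with uncertainty; this multiplication-factor sketch, with an illustrative function name, only conveys the mechanism:

```python
import numpy as np

def nowcast(reported_so_far, prop_reported):
    """Multiplication-factor nowcast: divide the counts reported so far
    by the estimated proportion already reported at each delay."""
    return (np.asarray(reported_so_far, dtype=float)
            / np.asarray(prop_reported, dtype=float))

# Two recent days where only 50% and 25% of cases are expected to be in yet
nowcast([50, 20], [0.5, 0.25])  # → [100., 80.]
```

The hard part, which this sketch ignores, is estimating `prop_reported` and its uncertainty from the reporting triangle of past data.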

New initiatives

We need collaborative efforts that use standardised datasets to compare methods and generate sustainable tools

“We were losing ourselves in details […] all we needed to know is, are the number of cases rising, falling or levelling off?”

Hans Rosling, Liberia, 2014